<?xml version="1.0" encoding="ISO-8859-1"?>
<metadatalist>
	<metadata ReferenceType="Conference Proceedings">
		<site>sibgrapi.sid.inpe.br 802</site>
		<holdercode>{ibi 8JMKD3MGPEW34M/46T9EHH}</holdercode>
		<identifier>8JMKD3MGPAW/3M5KB8P</identifier>
		<repository>sid.inpe.br/sibgrapi/2016/07.22.20.50</repository>
		<lastupdate>2016:07.22.20.50.36 sid.inpe.br/banon/2001/03.30.15.38 administrator</lastupdate>
		<metadatarepository>sid.inpe.br/sibgrapi/2016/07.22.20.50.36</metadatarepository>
		<metadatalastupdate>2022:06.14.00.08.37 sid.inpe.br/banon/2001/03.30.15.38 administrator {D 2016}</metadatalastupdate>
		<doi>10.1109/SIBGRAPI.2016.035</doi>
		<citationkey>SouzaArJrCoNaKeGu:2016:DeNuFe</citationkey>
		<title>Decreasing the Number of Features for Improving Human Action Classification</title>
		<format>On-line</format>
		<year>2016</year>
		<numberoffiles>1</numberoffiles>
		<size>2519 KiB</size>
		<author>Souza, Kleber Jacques de,</author>
		<author>Araujo, Arnaldo de Albuquerque,</author>
		<author>Jr, Zenilton Kleber G. do Patrocínio,</author>
		<author>Cousty, Jean,</author>
		<author>Najman, Laurent,</author>
		<author>Kenmochi, Yukiko,</author>
		<author>Guimaraes, Silvio Jamil F.,</author>
		<affiliation>NPDI/DCC/UFMG - Federal University of Minas Gerais - Computer Science Department - Belo Horizonte, MG, Brazil</affiliation>
		<affiliation>NPDI/DCC/UFMG - Federal University of Minas Gerais - Computer Science Department - Belo Horizonte, MG, Brazil</affiliation>
		<affiliation>Audio-Visual Information Proc. Lab. (VIPLAB) - Computer Science Department -- ICEI -- PUC Minas</affiliation>
		<affiliation>Université Paris-Est, Laboratoire d'Informatique Gaspard-Monge UMR 8049, UPEMLV, ESIEE Paris, ENPC, CNRS, F-93162 Noisy-le-Grand, France</affiliation>
		<affiliation>Université Paris-Est, Laboratoire d'Informatique Gaspard-Monge UMR 8049, UPEMLV, ESIEE Paris, ENPC, CNRS, F-93162 Noisy-le-Grand, France</affiliation>
		<affiliation>Université Paris-Est, Laboratoire d'Informatique Gaspard-Monge UMR 8049, UPEMLV, ESIEE Paris, ENPC, CNRS, F-93162 Noisy-le-Grand, France</affiliation>
		<affiliation>Audio-Visual Information Proc. Lab. (VIPLAB) - Computer Science Department -- ICEI -- PUC Minas</affiliation>
		<editor>Aliaga, Daniel G.,</editor>
		<editor>Davis, Larry S.,</editor>
		<editor>Farias, Ricardo C.,</editor>
		<editor>Fernandes, Leandro A. F.,</editor>
		<editor>Gibson, Stuart J.,</editor>
		<editor>Giraldi, Gilson A.,</editor>
		<editor>Gois, João Paulo,</editor>
		<editor>Maciel, Anderson,</editor>
		<editor>Menotti, David,</editor>
		<editor>Miranda, Paulo A. V.,</editor>
		<editor>Musse, Soraia,</editor>
		<editor>Namikawa, Laercio,</editor>
		<editor>Pamplona, Mauricio,</editor>
		<editor>Papa, João Paulo,</editor>
		<editor>Santos, Jefersson dos,</editor>
		<editor>Schwartz, William Robson,</editor>
		<editor>Thomaz, Carlos E.,</editor>
		<e-mailaddress>silvio.jamil@gmail.com</e-mailaddress>
		<conferencename>Conference on Graphics, Patterns and Images, 29 (SIBGRAPI)</conferencename>
		<conferencelocation>São José dos Campos, SP, Brazil</conferencelocation>
		<date>4-7 Oct. 2016</date>
		<publisher>IEEE Computer Society's Conference Publishing Services</publisher>
		<publisheraddress>Los Alamitos</publisheraddress>
		<booktitle>Proceedings</booktitle>
		<tertiarytype>Full Paper</tertiarytype>
		<transferableflag>1</transferableflag>
		<versiontype>finaldraft</versiontype>
		<keywords>Spatio-temporal video segmentation, human action classification, BossaNova representation.</keywords>
		<abstract>Action classification in videos has been a very active field of research over the past years. Human action classification has applications in various areas, such as video indexing, surveillance, and human-computer interfaces, among others. In this paper, we propose a strategy based on decreasing the number of features in order to improve accuracy in the human action classification task. To classify human actions, we first compute a spatio-temporal video segmentation to simplify the visual information; we then use a mid-level representation to build the feature vectors, which are finally classified. Experimental results demonstrate that our approach improves the quality of human action classification in comparison with the baseline while using 60% fewer features.</abstract>
		<language>en</language>
		<targetfile>PID4373569.pdf</targetfile>
		<usergroup>silvio.jamil@gmail.com</usergroup>
		<visibility>shown</visibility>
		<documentstage>not transferred</documentstage>
		<mirrorrepository>sid.inpe.br/banon/2001/03.30.15.38.24</mirrorrepository>
		<nexthigherunit>8JMKD3MGPAW/3M2D4LP</nexthigherunit>
		<nexthigherunit>8JMKD3MGPEW34M/4742MCS</nexthigherunit>
		<citingitemlist>sid.inpe.br/sibgrapi/2016/07.02.23.50 6</citingitemlist>
		<hostcollection>sid.inpe.br/banon/2001/03.30.15.38</hostcollection>
		<agreement>agreement.html .htaccess .htaccess2</agreement>
		<lasthostcollection>sid.inpe.br/banon/2001/03.30.15.38</lasthostcollection>
		<url>http://sibgrapi.sid.inpe.br/rep-/sid.inpe.br/sibgrapi/2016/07.22.20.50</url>
	</metadata>
</metadatalist>